Excitation photons are emitted from the microscope by a titanium-sapphire laser pulsing at about 80 MHz, at a wavelength of $\lambda = 990$ nm and a power of $I_0 \approx 20$ mW. The beam travels through roughly $b_0 = 350\,\mu$m of neural tissue to layer 2/3 of the neocortex, where it is focused tightly enough that two photons from the same pulse are regularly absorbed nearly simultaneously by a single fluorophore.


To model how much of the laser light is absorbed in the neural tissue we will employ the Beer-Lambert law, which relates the amount of light transmitted through a material to the amount of light it receives and to a few properties of the material.

$$\Phi_t = \Phi_ie^{-\mu_a b c} \hspace{10pt} \text{where}$$

$\Phi_i = I_0$ is the amount of light arriving at one end of the material

$\Phi_t$ is the amount of light being transmitted from the other end

$\mu_a$ is the molar attenuation coefficient, the material's propensity to absorb light (determined below)

$b = b_0$ is the thickness of the sample, also called the "path length" of the material

$c$ is the concentration of the material in the sample that $\mu_a$ is defined for (determined below)

In trying to determine how much light neural tissue absorbs I came across this paper, which offers a means of approximating $\mu_a$. Otherwise we could crudely approximate it by finding the most abundant molecules and assuming they dominate the absorption. For example, 62% of cerebrospinal fluid protein is albumin, and all membranes are lipid bilayers with very similar molar absorption coefficients. Most of this information and more can be found in this book. Or we could neglect this altogether; I just wanted to make sure I was aware of the assumptions being made.
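The attenuation step can be sketched numerically. A minimal example, where the attenuation coefficient and concentration are assumed placeholders rather than measured tissue constants:

```python
import numpy as np

def transmitted(phi_i, mu_a, b, c):
    """Beer-Lambert law: radiant energy leaving a slab of thickness b."""
    return phi_i * np.exp(-mu_a * b * c)

# Placeholder numbers: I0 = 20 mW in, b0 = 350 um of tissue, and an
# assumed mu_a chosen only to illustrate the exponential falloff.
phi_t = transmitted(20.0, 0.004, 350.0, 1.0)  # about 4.93 mW reaches the focus
```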


Now, assuming we have determined the incident radiant energy reaching the indicator molecules, $\Phi_i$, we can again use the Beer-Lambert law to approximate how much light is emitted from the sample region within the point-spread function. Each fluorophore has a particular quantum efficiency, $q$: the number of photons emitted divided by the number of photons absorbed. So we represent the energy emitted as the product of the incident radiant energy and the quantum efficiency, $q\Phi_i$. However, the indicators also tend to absorb some of the photons emitted by their neighbors, and that must be accounted for as well: $q\Phi_ie^{-\mu_{a.ind}bc}$. Subtracting the second term from the first gives an approximation to the fluorescence of a sample (or pixel, in my case) as

$$F = q\Phi_i(1-e^{-\mu_{a.ind}bc}) \hspace{10pt} \text{where}$$

$F$ is the amount of light being emitted from the sample within the point spread function

$q = 0.610 \pm 0.004$ is the quantum efficiency of GCaMP6s, the indicator used in James' experiments

$\mu_{a.ind} = 2.91 \pm 0.04$ is the molar attenuation coefficient of GCaMP6s, the indicator used in James' experiments

$b \approx 3\mu$m is roughly the vertical extent of the point-spread function (if I remember correctly)

$c$ is the concentration of indicator in the cell. I could not find a reference yet, but my understanding is that this is assumed to be more or less equilibrated after some time and independent of distance from the injection site. This is because the indicator is delivered via a virus, and once a cell has been infected it begins producing the protein internally. While cells produce these proteins at different rates, from the little I could find this variability should be small enough that a decent approximating constant value exists. Or again, we can ignore this; I just want to make sure I know what I'm ignoring.
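The fluorescence formula above can likewise be sketched; here $\Phi_i$ is normalized to one, and the indicator concentration $c$ is an assumed placeholder, since both depend on the steps discussed above:

```python
import numpy as np

def fluorescence(phi_i, q, mu_ind, b, c):
    """F = q*phi_i*(1 - exp(-mu_ind*b*c)): light emitted within the PSF."""
    return q * phi_i * (1.0 - np.exp(-mu_ind * b * c))

q = 0.610      # quantum efficiency of GCaMP6s (from the text)
mu_ind = 2.91  # molar attenuation coefficient of GCaMP6s (from the text)
b = 3.0        # vertical extent of the point-spread function (microns)
c = 0.01       # assumed indicator concentration (placeholder)
F = fluorescence(1.0, q, mu_ind, b, c)  # emitted fraction of a unit input
```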


Now, once the fluorescence emitted from the sample has been determined, it must pass back through the overlying neural tissue before it reaches the PMT and is recorded. So we can repeat the calculation from the first part, with the emitted fluorescence substituted for the incident radiance.


All that is left is modeling how the microscope transforms the analog fluorescence into a digital signal. I chose to ignore the quantum fluctuations during sampling, as that was beginning to go over my head. After reading the microscope manual I believe everything else can be modeled. Starting with the PMT: it essentially just adds noise, so we model it by a function that returns the signal with added noise, $N(F)$. Then, as far as I'm aware, all that is left is to multiply by the gain, $g$, which I couldn't find by examining the microscope, and add the offset, $s \approx -1$ pA. In addition, all of the parameters set on the microscope during each recording are saved in an XML file with the same name as the dataset, so I have access to these values for each dataset if needed. Putting this together, the output fluorescence, $F_f$, that we see in our data would be

$$F_f = gN(F) + s$$
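A minimal sketch of this digitization model, assuming $N(F)$ adds zero-mean Gaussian noise and using placeholder values for the gain $g$ and the noise scale, since the true gain was not found:

```python
import numpy as np

rng = np.random.default_rng(0)

def digitize(F, g=1.0, s=-1.0, sigma=0.03):
    """F_f = g*N(F) + s, with N(F) modeled as F plus zero-mean Gaussian PMT noise."""
    noisy = F + rng.normal(0.0, sigma, size=np.shape(F))
    return g * noisy + s

trace = digitize(np.full(1000, 5.0))  # constant fluorescence through the model
```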

Now with all of the above information we can create some fairly ideal synthetic data. Quite interestingly, we can also run this process in reverse: take a recorded measurement and determine the concentration of calcium ions that were present at the recording site. And I don't mean approximating the amount of calcium-bound indicator; I mean the actual concentration of calcium that reacted with the indicator (thanks to knowing the $K_d$ for GCaMP6s). Let $C = [\text{Ca}^{2+}]$, $G = [\text{GCaMP6s}]$ (free indicator), and $G_C = [\text{Ca}^{2+}\text{-bound GCaMP6s}]$.

$$K_d = \frac{C\,G}{G_C}$$

The amount of Ca$^{2+}$ bound to GCaMP6s is governed by its dissociation constant, $K_d$, which is an experimentally derived and known value. Solving for the free Ca$^{2+}$ concentration, we see that if we can determine the concentrations $G_C$ and $G$, then we can explicitly calculate the calcium in the system: both the bound amount we observe in the data AND the free calcium.

$$C = \frac{K_d\,G_C}{G}$$

It turns out we can determine $G_C$ using the Beer-Lambert law: the $c$ in the fluorescence formula is the concentration of Ca$^{2+}$-bound indicator, i.e., $G_C$!

$$F = q\Phi_i(1-e^{-\mu_{a.ind}bG_C})$$

If we solve for $G_C$, then we can determine the concentration of the Ca$^{2+}$-bound indicator from the measurements.

$$G_C = -\frac{\ln(1-\frac{F}{q\Phi_i})}{\mu_{a.ind}b}$$

Therefore, if we can somehow approximate the total concentration of GCaMP6s, then the free indicator is $G = [\text{GCaMP6s}] - G_C$, and from the measurement data alone we can approximate the free calcium. So the final formula is...

$$[\text{Ca}^{2+}] = \frac{K_d\,G_C}{[\text{GCaMP6s}] - G_C}, \qquad G_C = -\frac{\ln\!\left(1-\frac{F}{q\Phi_i}\right)}{\mu_{a.ind}\,b}$$
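As a sanity check, the inversion can be run as a round trip on the forward model; the numbers below are placeholders, and the free-calcium step uses the standard dissociation relation $K_d = CG/G_C$ with free indicator $G$:

```python
import numpy as np

def bound_indicator(F, q_phi_i, mu_ind, b):
    """Invert F = q*phi_i*(1 - exp(-mu_ind*b*Gc)) for the bound-indicator concentration Gc."""
    return -np.log(1.0 - F / q_phi_i) / (mu_ind * b)

def free_calcium(Gc, G_total, Kd):
    """Standard dissociation relation Kd = C*G/Gc, with free indicator G = G_total - Gc."""
    return Kd * Gc / (G_total - Gc)

# Round trip with assumed numbers: forward model, then inversion.
mu_ind, b, q_phi_i = 2.91, 3.0, 1.0
Gc_true = 0.02
F = q_phi_i * (1.0 - np.exp(-mu_ind * b * Gc_true))
Gc = bound_indicator(F, q_phi_i, mu_ind, b)  # recovers Gc_true
```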

This would be more meaningful for biological measurements; for now, however, we will just use it for creating synthetic data.


In [3]:
import numpy as np
import cv2
import matplotlib.pyplot as plt
from scipy.ndimage import gaussian_filter as gauss

def mu(): # molar attenuation, approximated from the paper cited above

    # u = B*S*uoxy + B*(1-S)*udeoxy + W*uwater + F*ufat + M*umelan + 2.3*(Cb*eb + Cbc*ebc)
    return 75.0 # placeholder constant until the formula above is implemented

def beersLaw(incidentLight):
    """incidentLight - radiant energy actually leaving the microscope"""

    b = 350.0 # path length, i.e. the thickness of the neural tissue (microns)
    c = 1.0 # concentration of the material in the sample that mu is defined for
    rslt = incidentLight*np.exp(-mu()*b*c) # radiant energy actually reaching the fluorophore
    
    return rslt

class Model:

    def __init__(self, dim=21, thickness=5, outside=1.0, inside=10.0, calAmp=50,
                 fluctuations=0.2, noise=0.03, speed=1.0, breadth=3):

        self.params = {}
        self.params["dim"] = dim
        self.params["processThickness"] = thickness
        self.params["spikeBreadth"] = breadth
        self.params["outsideConcentration"] = outside
        self.params["insideConcentration"] = inside
        self.params["calciumAmplitude"] = calAmp
        self.params["concentrationVariance"] = fluctuations
        self.params["signalNoise"] = noise
        self.params["speed"] = speed

        self.__makeData()

    def __getitem__(self, key):

        return self.params[key]  # a missing key already raises KeyError

    def __setitem__(self, key, value):

        self.params[key] = value
        self.__makeData()

    def __makeData(self):

        time = int(float(self["dim"] - self["spikeBreadth"])/self["speed"]) + 1
        self.data = np.zeros((time, self["dim"], self["dim"]))
        self.fyTrue = np.zeros((time, self["dim"], self["dim"]))
        self.fxTrue = np.zeros((time-1, self["dim"], self["dim"]))
        wall1 = self["dim"]//2 - self["processThickness"]//2  # integer division so the walls can index arrays
        wall2 = wall1 + self["processThickness"]
        self.data[:, :, :wall1] = self["outsideConcentration"]
        self.data[:, :, wall1:wall2] = self["insideConcentration"]
        self.data[:, :, wall2:] = self["outsideConcentration"]

        for i, frame in enumerate(self.data):
            d = int(i*self["speed"])
            frame[d:d+self["spikeBreadth"], wall1:wall2] = self["calciumAmplitude"]
            self.fyTrue[i, d:d+self["spikeBreadth"], wall1:wall2] = self["speed"]
        self.fyTrue = self.fyTrue[:-1]
        # multiplicative fluctuations in concentration, then additive sensor noise
        self.data += (2*np.random.random((time, self["dim"], self["dim"])) - 1)*self.data*self["concentrationVariance"]
        self.data += (2*np.random.random((time, self["dim"], self["dim"])) - 1)*self["signalNoise"]*self["calciumAmplitude"]

    def run(self):

        self.calcFlow()
        self.show()
        self.error()

    def calcFlow(self, relative=True, blur=(0,0,0), parameters=None):
        flowParams = {'pyr_scale':0.5, 'levels':3, 'winsize':7, 'iterations':3, 'poly_n':5,
                      'poly_sigma':1.1, 'flags':cv2.OPTFLOW_FARNEBACK_GAUSSIAN}
        flowParams = parameters if parameters else flowParams
        frames, h, w = self.data.shape
        self.xflow = np.empty((frames-1, h, w))
        self.yflow = np.empty((frames-1, h, w))

        data = self.data
        if relative:
            f0 = np.percentile(self.data, 10, 0)  # pixel-wise baseline: 10th percentile over time
            plt.imshow(f0, cmap='gray', interpolation='nearest', vmin=f0.min(), vmax=f0.max())
            plt.title("F0"); plt.colorbar()
            data = (self.data - f0)/f0  # dF/F

        blurData = gauss(data, blur) if any(blur) else data
        blurData = blurData.astype(np.float32)  # single-channel float input for OpenCV
        prev = blurData[0]  # seed from the same (blurred, relative) stack as the rest
        for i, curr in enumerate(blurData[1:]):
            flow = cv2.calcOpticalFlowFarneback(prev, curr, None, **flowParams)
            self.xflow[i] = flow[:, :, 0]
            self.yflow[i] = flow[:, :, 1]
            prev = curr

    def show(self, frames=[0,None], cols=3, parameters=None):
        vecParams = {'pivot':'tail', 'angles':'xy', 'scale_units':'xy', 'color':'yellow'}
        vecParams = parameters if parameters else vecParams

        if isinstance(frames, int):
            plt.figure(figsize=(12,12))
            plt.imshow(self.data[frames], cmap='gray')
            plt.quiver(self.xflow[frames], self.yflow[frames], **vecParams)
            return
        else:
            vmin = self.data.min()
            vmax = self.data.max()
            begf, endf = frames
            endf = endf if endf else len(self.xflow)
            rows = int(np.ceil((endf-begf)/float(cols)))
            fw = 13; fh = float(rows*fw)/cols
            plt.figure(figsize=(fw, fh))
            for i in range(begf, endf):
                plt.subplot(rows,cols,i-begf+1)
                plt.imshow(self.data[i], cmap='gray', interpolation='nearest', vmin=vmin, vmax=vmax); plt.colorbar()
                plt.title("Flow from frame %d to frame %d" % (i,i+1))
                plt.quiver(self.xflow[i], self.yflow[i], **vecParams)
        plt.tight_layout()

    def error(self):

        trueNorms = np.sqrt(self.fxTrue**2 + self.fyTrue**2)
        approxNorms = np.sqrt(self.xflow**2 + self.yflow**2)
        maxErr = trueNorms + approxNorms
        maxErr[np.abs(maxErr) < 1e-12] = 1.0  # avoid division by zero where both flows vanish
        err = np.sqrt((self.fxTrue - self.xflow)**2 + (self.fyTrue - self.yflow)**2)/maxErr
        print("Maximum Point-wise Error =", err.max())
        print("Minimum Point-wise Error =", err.min())
        frob = np.linalg.norm(err, 'fro', (1, 2))
        print("Maximum Frobenius Norm =", frob.max())
        print("Minimum Frobenius Norm =", frob.min())
        totErr = np.average(frob)
        print("Total Error =", totErr)
        return totErr



In [ ]: